
    Social learning strategies modify the effect of network structure on group performance

    The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet previous studies have reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve the greatest performance on simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines.
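
    The two social learning strategies contrasted above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, not the authors' actual model: the toy network, the random payoff landscape, and the sample size of three are all placeholder assumptions. Agents either adopt the most frequent solution among a small sample of contacts (conformity) or copy the contact with the highest payoff (best member).

    import random
    from collections import Counter

    # Hypothetical toy setup: a small, densely connected ("efficient") network;
    # swapping in a sparser graph would give an "inefficient" structure.
    network = {i: [(i - 1) % 6, (i + 1) % 6, (i + 2) % 6] for i in range(6)}
    solutions = {i: random.choice(["A", "B", "C"]) for i in network}
    payoffs = {i: random.random() for i in network}  # placeholder payoff landscape

    def conformity_step(neighbors, solutions, sample_size=3):
        """Adopt the most frequent solution among a random sample of contacts."""
        sample = random.sample(neighbors, min(sample_size, len(neighbors)))
        return Counter(solutions[n] for n in sample).most_common(1)[0][0]

    def best_member_step(neighbors, solutions, payoffs):
        """Copy the solution of the contact with the highest payoff."""
        return solutions[max(neighbors, key=lambda n: payoffs[n])]

    # One synchronous update of every agent under each strategy:
    conformity_update = {i: conformity_step(network[i], solutions) for i in network}
    best_member_update = {i: best_member_step(network[i], solutions, payoffs) for i in network}
    print(conformity_update, best_member_update)

    Running the two update rules on dense versus sparse networks would mirror the efficient/inefficient contrast described in the abstract.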

    The Risks We Dread: A Social Circle Account

    What makes some risks dreadful? We propose that people are particularly sensitive to threats that could kill roughly the number of people in a typical human social circle. Although there is some variability in reported sizes of social circles, active contact rarely seems to be maintained with more than about 100 people. The loss of this immediate social group may have had survival consequences in the past and still causes great distress to people today. Therefore, we hypothesize that risks that threaten a much larger number of people (e.g., 1000) will not be dreaded more than those that threaten to kill "only" the number of people typical for social circles. We found support for this hypothesis in 9 experiments using different risk scenarios, measurements of fear, and samples from different countries. Fear of risks killing 100 people was higher than fear of risks killing 10 people, but there was no difference in fear of risks killing 100 or 1000 people (Experiments 1–4, 7–9). Also in support of the hypothesis, the median number of deaths that would cause the maximum level of fear was 100 (Experiments 5 and 6). These results are not a consequence of a lack of differentiation between the numbers 100 and 1000 (Experiments 7 and 8), and are different from the phenomenon of "psychophysical numbing," which occurs in the context of altruistic behavior towards members of other communities rather than in the context of threat to one's own community (Experiment 9). We discuss several possible explanations of these findings. Our results stress the importance of considering social environments when studying people's understanding of and reactions to risks.

    Seeing through the eyes of the respondent: an eye-tracking study on survey question comprehension

    "To ensure that the data obtained through surveys are reliable and lead to valid conclusions, respondents must comprehend the questions as intended by the survey designer and find it easy to answer them accurately. Applying a psycholinguistic perspective to survey question design, Lenzner, Kaczmirek, and Lenzner (2010) identified seven text features that undermine reading comprehension and thus increase the cognitive burden imposed by survey questions. In this study, we extend the earlier findings by Lenzner et al. (2010) in two ways. First, we use eye tracking as a more direct method to examine whether comprehension is indeed impeded by these text features. While response time is a valuable indicator of the overall cognitive effort required to answer a survey question, it does not enable us to distinguish between the time required to read and understand a question (comprehension stage) and the time it takes to provide an answer (including retrieval, judgment, and response selection). In contrast, recording respondents' eye movements while answering a Web survey allows us to identify the specific parts of the question they struggle with during the comprehension stage." (author's abstract

    Collective moderation of hate, toxicity, and extremity in online discussions

    How can citizens moderate hate, toxicity, and extremism in online discourse? We analyze a large corpus of more than 130,000 discussions on German Twitter over four turbulent years marked by the migrant crisis and political upheavals. With the help of human annotators, language models, machine learning classifiers, and longitudinal statistical analyses, we discern the dynamics of different dimensions of discourse. We find that expressing simple opinions, not necessarily supported by facts but also without insults, relates to the least hate, toxicity, and extremity of speech and speakers in subsequent discussions. Sarcasm also helps in achieving those outcomes, in particular in the presence of organized extreme groups. More constructive comments such as providing facts or exposing contradictions can backfire and attract more extremity. Mentioning either outgroups or ingroups is typically related to a deterioration of discourse in the long run. A pronounced emotional tone, either negative such as anger or fear, or positive such as enthusiasm and pride, also leads to worse outcomes. Going beyond one-shot analyses on smaller samples of discourse, our findings have implications for the successful management of online commons through collective civic moderation.

    Measuring Risk Literacy: The Berlin Numeracy Test

    We introduce the Berlin Numeracy Test, a new psychometrically sound instrument that quickly assesses statistical numeracy and risk literacy. We present 21 studies (n=5336) showing robust psychometric discriminability across 15 countries (e.g., Germany, Pakistan, Japan, USA) and diverse samples (e.g., medical professionals, general populations, Mechanical Turk web panels). Analyses demonstrate desirable patterns of convergent validity (e.g., numeracy, general cognitive abilities), discriminant validity (e.g., personality, motivation), and criterion validity (e.g., numerical and nonnumerical questions about risk). The Berlin Numeracy Test was found to be the strongest predictor of comprehension of everyday risks (e.g., evaluating claims about products and treatments; interpreting forecasts), doubling the predictive power of other numeracy instruments and accounting for unique variance beyond other cognitive tests (e.g., cognitive reflection, working memory, intelligence). The Berlin Numeracy Test typically takes about three minutes to complete and is available in multiple languages and formats, including a computer adaptive test that automatically scores and reports data to researchers (www.riskliteracy.org). The online forum also provides interactive content for public outreach and education, and offers a recommendation system for test format selection. Discussion centers on construct validity of numeracy for risk literacy, underlying cognitive mechanisms, and applications in adaptive decision support.

    Extending the concept of job withdrawal: Identifying, predicting, and understanding adaptation to work

    Thesis (B.S.) in Liberal Arts and Sciences, University of Illinois at Urbana-Champaign, 1987. Bibliography: leaves 42-46. Microfiche of typescript. [Urbana, Ill.]: Photographic Services, University of Illinois, U of I Library, [1987]. 2 microfiches (81 frames): negative.

    Presenting quantitative information about decision outcomes: a risk communication primer for patient decision aid developers

    Background: Making evidence-based decisions often requires comparison of two or more options. Research-based evidence may exist which quantifies how likely the outcomes are for each option. Understanding these numeric estimates improves patients' risk perception and leads to better informed decision making. This paper summarises current "best practices" in communication of evidence-based numeric outcomes for developers of patient decision aids (PtDAs) and other health communication tools.
    Method: An expert consensus group of fourteen researchers from North America, Europe, and Australasia identified eleven main issues in risk communication. Two experts for each issue wrote a "state of the art" summary of best evidence, drawing on the PtDA, health, psychological, and broader scientific literature. In addition, commonly used terms were defined and a set of guiding principles and key messages were derived from the results.
    Results: The eleven key components of risk communication were: 1) Presenting the chance an event will occur; 2) Presenting changes in numeric outcomes; 3) Outcome estimates for test and screening decisions; 4) Numeric estimates in context and with evaluative labels; 5) Conveying uncertainty; 6) Visual formats; 7) Tailoring estimates; 8) Formats for understanding outcomes over time; 9) Narrative methods for conveying the chance of an event; 10) Important skills for understanding numerical estimates; and 11) Interactive web-based formats. Guiding principles from the evidence summaries advise that risk communication formats should reflect the task required of the user, should always define a relevant reference class (i.e., denominator) over time, should aim to use a consistent format throughout documents, should avoid "1 in x" formats and variable denominators, should consider the magnitude of numbers used and the possibility of format bias, and should take into account the numeracy and graph literacy of the audience.
    Conclusion: A substantial and rapidly expanding evidence base exists for risk communication. Developers of tools to facilitate evidence-based decision making should apply these principles to improve the quality of risk communication in practice.
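
    The "consistent denominator" guiding principle above can be illustrated with a small, hypothetical helper; the function name and the choice of 1,000 as the reference class are assumptions for illustration, not part of the paper.

    def as_frequency(probability, denominator=1000):
        """Express a probability as 'n out of <denominator>' using a fixed reference class."""
        n = round(probability * denominator)
        return f"{n} out of {denominator}"

    # "1 in 8" and "3 in 100" become directly comparable once both use the same denominator:
    print(as_frequency(1 / 8))    # 125 out of 1000
    print(as_frequency(3 / 100))  # 30 out of 1000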

    Beyond collective intelligence: Collective adaptation.
